Instability of an inverse problem for the stationary radiative transport near the diffusion limit
In this work, we study the instability of an inverse problem for the radiative transport equation with angularly averaged measurements near the diffusion limit, i.e. when the normalized mean free path (the Knudsen number) satisfies 0 < ε ≪ 1. It is well known that the stability transitions from Hölder type to logarithmic type as ε → 0, but a theory of this transition is still an open problem. In this study, we exhibit the transition by establishing the balance between two regimes determined by the relative sizes of ε and the perturbation in the measurements. When ε is sufficiently small, we obtain exponential instability, which corresponds to the diffusive regime; otherwise we obtain Hölder instability, which corresponds to the transport regime.
Comment: 20 pages
Marchenko-Pastur law with relaxed independence conditions
We prove the Marchenko-Pastur law for the eigenvalues of sample covariance matrices in two new situations where the data does not have independent coordinates. In the first scenario, the block-independent model, the coordinates of the data are partitioned into blocks in such a way that the entries in different blocks are independent, but the entries from the same block may be dependent. In the second scenario, the random tensor model, the data is a homogeneous random tensor of order , i.e. the coordinates of the data are all the different products of variables chosen from a set of independent random variables. We show that the Marchenko-Pastur law holds for the block-independent model as long as the size of the largest block is , and for the random tensor model as long as . Our main technical tools are new concentration inequalities for quadratic forms in random variables with block-independent coordinates, and for random tensors.
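The block-independent model is easy to probe empirically. The sketch below is not from the paper: it builds data whose coordinates come in small blocks that share a random scale factor (dependent within a block, uncorrelated, unit variance, independent across blocks) and compares the sample-covariance spectrum to the Marchenko-Pastur support for aspect ratio y = n/p.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, block = 2000, 500, 4            # samples, dimension, block size

# Block-dependent but uncorrelated coordinates: every block of 4
# consecutive entries shares a random magnitude with E[r^2] = 1, so
# each coordinate still has unit variance and zero cross-correlation.
z = rng.standard_normal((p, n))
r2 = rng.exponential(1.0, size=(p, n // block))
scale = np.repeat(np.sqrt(r2), block, axis=1)     # shape (p, n)
X = z * scale

S = X.T @ X / p                        # n x n sample covariance
eigs = np.linalg.eigvalsh(S)

# Marchenko-Pastur support for unit variance and aspect ratio y = n/p.
y = n / p
lo, hi = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
inside = np.mean((eigs > lo - 0.15) & (eigs < hi + 0.15))
print(f"empirical range [{eigs.min():.2f}, {eigs.max():.2f}] "
      f"vs MP support [{lo:.2f}, {hi:.2f}], fraction inside: {inside:.3f}")
```

With block size 4 (far below any growth threshold), essentially all eigenvalues fall inside the Marchenko-Pastur support, consistent with the theorem.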
Variational Hamiltonian Monte Carlo via Score Matching
Traditionally, the field of computational Bayesian statistics has been
divided into two main subfields: variational methods and Markov chain Monte
Carlo (MCMC). In recent years, however, several methods have been proposed
based on combining variational Bayesian inference and MCMC simulation in order
to improve their overall accuracy and computational efficiency. This marriage
of fast evaluation and flexible approximation provides a promising means of
designing scalable Bayesian inference methods. In this paper, we explore the
possibility of incorporating variational approximation into a state-of-the-art
MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient
computation in the simulation of Hamiltonian flow, which is the bottleneck for
many applications of HMC in big data problems. To this end, we use a free-form approximation induced by a fast and flexible surrogate function based on single-hidden-layer feedforward neural networks. The surrogate provides a sufficiently accurate approximation while allowing for fast exploration of the parameter space, resulting in an efficient approximate inference algorithm. We demonstrate the advantages of our method on both synthetic and real data problems.
Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases
For big data analysis, high computational cost for Bayesian methods often
limits their applications in practice. In recent years, there have been many
attempts to improve computational efficiency of Bayesian inference. Here we
propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo (MCMC) method, namely Hamiltonian Monte Carlo (HMC). The key idea is to explore and exploit the structure and
regularity in parameter space for the underlying probabilistic model to
construct an effective approximation of its geometric properties. To this end,
we build a surrogate function to approximate the target distribution using
properly chosen random bases and an efficient optimization process. The
resulting method provides a flexible, scalable, and efficient sampling
algorithm, which converges to the correct target distribution. We show that by
choosing the basis functions and optimization process differently, our method
can be related to other approaches for the construction of surrogate functions
such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
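A surrogate built from random bases can be sketched concretely. The following is an illustrative stand-in, not the paper's construction: it fits a linear combination of random Fourier features to evaluations of a log density by ridge regression, giving a cheap closed-form approximation (whose gradient is also analytic) of the kind a surrogate-HMC sampler could use.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 2, 200                         # dimension, number of random features
W = rng.standard_normal((m, d))       # random frequencies
b = rng.uniform(0.0, 2 * np.pi, m)    # random phases

def features(X):
    """Random Fourier feature map: one cosine per random basis."""
    return np.cos(X @ W.T + b)

# Target to approximate: log density of N(0, I) up to a constant.
logpi = lambda X: -0.5 * np.sum(X ** 2, axis=1)

# Fit the surrogate coefficients by ridge regression on training points.
X_train = 2.0 * rng.standard_normal((500, d))
Phi = features(X_train)
coef = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(m),
                       Phi.T @ logpi(X_train))

surrogate = lambda X: features(X) @ coef

# Accuracy where the sampler actually spends its time (near the mode).
X_test = rng.standard_normal((100, d))
mae = np.mean(np.abs(surrogate(X_test) - logpi(X_test)))
print("mean absolute surrogate error on test points:", mae)
```

Swapping the cosine basis for splines or kernel sections recovers the connection to generalized additive models and Gaussian process surrogates that the abstract mentions.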
A new phase space method for recovering index of refraction from travel times
We develop a new phase space method for reconstructing the index of refraction of a medium from travel time measurements. The method is based on the so-called Stefanov–Uhlmann identity, which links two Riemannian metrics with their travel time information. We design a numerical algorithm to solve the resulting inverse problem. The new algorithm is a hybrid approach that combines both Lagrangian and Eulerian formulations. In particular, the Lagrangian formulation in phase space can take into account multiple arrival times naturally, while the Eulerian formulation for the index of refraction allows us to compute the solution in physical space. Numerical examples including isotropic metrics and the Marmousi synthetic model are shown to validate the new method.
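The data being inverted here is simple to state: in a medium with index of refraction n(x), the travel time along a path is the line integral of n. The sketch below (not the paper's solver, whose rays are bent geodesics traced in phase space) integrates n along a straight segment, which is exact for constant n and illustrates the forward map that the inverse problem undoes.

```python
import numpy as np

def travel_time(n, x0, x1, steps=1000):
    """Travel time along the straight segment x0 -> x1, i.e. the line
    integral of the index of refraction n, by the trapezoid rule."""
    t = np.linspace(0.0, 1.0, steps)
    pts = x0[None, :] + t[:, None] * (x1 - x0)[None, :]
    vals = n(pts)                                  # n sampled along the path
    length = np.linalg.norm(x1 - x0)
    dt = t[1] - t[0]
    return length * np.sum(0.5 * (vals[1:] + vals[:-1])) * dt

# Constant index 1.5 over a segment of length 5: travel time 1.5 * 5.
n_const = lambda P: np.full(len(P), 1.5)
x0, x1 = np.array([0.0, 0.0]), np.array([3.0, 4.0])
T = travel_time(n_const, x0, x1)
print(T)   # -> 7.5

# A spatially varying (hypothetical) index just changes the integrand.
n_var = lambda P: 1.0 + 0.1 * np.sum(P ** 2, axis=1)
print(travel_time(n_var, x0, x1))
```

For non-constant n the physical rays bend to minimize this integral, which is exactly why the reconstruction requires the phase-space machinery described in the abstract rather than straight-ray integration.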